perm filename ALPHIL.POX[HAL,HE] blob sn#190988 filedate 1975-12-05 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00018 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00003 00002	\!head1(THE PHILOSOPHY OF THE MANIPULATOR LANGUAGE)
C00005 00003	\!head2(MOVING OBJECTS TO NEW POSITIONS)
C00007 00004	\!head3(Specification of destinations)
C00018 00005	\!head3(Trajectory specification)
C00030 00006	\,This page is here temporarily.  Will go to end.
C00031 00007	\!head3(Indirect arm specification)
C00043 00008	\!head2(GENTLE MOTIONS)
C00046 00009	\!head3(Velocity control)
C00053 00010	\!head3(Force Feedback)
C00062 00011	\!head3(Force Application)
C00070 00012	\!head3(Adaptive Control)
C00074 00013	\!head2(CONTROL OF SEVERAL DEVICES)
C00075 00014	\!head3(Sequential control)
C00080 00015	\!head3(Independent parallel control)
C00086 00016	\!head3(Synchronized parallel control)
C00090 00017	\!head3(Coordinated parallel control)
C00093 00018	\!head2(EASE OF USE)
C00096 ENDMK
C⊗;
\!head1(THE PHILOSOPHY OF THE MANIPULATOR LANGUAGE);


\F0\JDesign decisions have been made throughout the construction of AL.
The philosophy that has guided these decisions arises out of
experience with WAVE, the previous arm language, and with standard
language-design issues.  The intent of this section is to mention
some of the questions that have arisen and to explain the answers we
have found.

For the sake of this discussion, only the low-level manipulation will
be discussed.  The task domain in which we are interested is to cause
the manipulators to perform some set of actions.  These actions are
principally motions of arms to specified locations and
opening/closing of the hands.

These motions have several purposes.  We will discuss the
capabilities needed to accomplish these purposes, and the choices taken
to implement these capabilities.
\.


\,
\!head2(MOVING OBJECTS TO NEW POSITIONS);

\J\F0The first purpose of motions is to move objects in the world to new
positions.  The destination can be defined in several 
ways.  For example, one may want to move a pin into a hole.  The
destination of the pin can be stated as "the place where the hole happens
to be in the real world", "the place where the hole is planned to be", or
even "the place near the planned location which feels most like a hole".
This last specification turns out to be generally the most useful, since
the first is often unavailable to the program, and the second may not
be at all close to where the hole really is.
\.

\,
\!head3(Specification of destinations);

\F0\JIt is necessary to somehow specify the destination of motions to
achieve the purpose of controlled motion of objects.  The arms that we are
using have six degrees of freedom.  Possible choices for the specification
of destinations include sets of six joint angles, location and orientation
vectors with respect to some coordinate system, and the minimization of
some function of observable data (for example, "the coldest spot in the
room" or "the closest approach possible to a given point given that only
joint 3 may be used").  Yet another approach is to name the current
location of the arm and later to use this name as the destination.  A
variant of this is to say "three centimeters above location LOC".  This
assumes that it is possible for the user to somehow get the arm once to the
place where he wants it (manually or otherwise).

The WAVE system implemented several of these methods.  It allowed the user
to position the arm and then have its position read into a variable, and it
also allowed the user to specify location and orientation vectors.  It was
even possible to say "the point three centimeters above LOC".  The
principal limitation of WAVE was that it did not allow vectors and
transforms (location/orientation variables) to be interconverted.  There
was no way to build a transform based on a vector position, and there was
no way to extract, say, the vector of rotation from a transform.  Scalars
(for example, the magnitude of a vector) were absent altogether.  These
lacks meant that it was not in general possible to calculate destinations,
which for WAVE were always transforms.

AL attempts to overcome this limitation by providing a datatype (the
"frame") that refers to a location-orientation.  This datatype was called
the transform, confusingly enough, in WAVE.  Arithmetic is also provided,
and this has led to other datatypes, in particular, vectors, scalars, and
transforms.  AL transforms are operators that can be applied to frames to
yield new frames.  This arithmetic capability allows the user to calculate
destinations, which for AL are always frames.

Once an arithmetic capability is present, it seems that something
approximating a general purpose algorithmic language is desired.  The
question arises whether we wish our language to have general-purpose
control structures to match the arithmetic structures.  WAVE only had
variables in a limited sense, since no arithmetic on them was possible,
there were only transforms and vectors, and there was a fixed limit (about
ten) on the number of variables allowed.  The structure of WAVE allowed
programming by use of macros, and there was a primitive facility for
conditional and unconditional transfer of control (for example, "if the
hand closes farther than 1 centimeter, go back five instructions").
WAVE was never really designed as a language; it was rather a driver
for a set of disparate routines needed to accomplish tasks with the
manipulators.  Its eventual development into something resembling
a programming language indicates how important it is that such a language
form the basis for accomplishing such tasks.

The art of language design has progressed far beyond the rudimentary
(though often sufficient) techniques incorporated into WAVE.  We decided
that on the surface, AL should look like ALGOL, with the conveniences of
block structure, declarations, and statements.  We need at least motion
statements, arithmetic statements, and control-of-execution statements.
ALGOL has certain advantages over other primary types of programming
language (like assembly languages, FORTRAN, LISP) in that it is fairly easy
to follow the intent of pieces of code.  Much more complete discussions of
the virtues and techniques of structured programming can be found in
standard texts.
\.

\JThe restriction that destinations be represented as frame values imposes
some inherent limitations.  It is not possible to ask for one joint alone
to move, even though WAVE allowed this.  The reason for this decision is
that the "one joint at a time" feature was put into WAVE to allow debugging
of individual joint servo code, but was never used to perform actual tasks.
One can argue that just as a human has control over individual joints, it
is inherently proper that the mechanical system have similar control.
There are several answers to this.  One is that control of the manipulators
ought to be done in a device-independent fashion.  This makes for
portability of the system and of programs.  It does not often matter what
the rest of the arm does as long as the grasping device on the end moves in
the specified way.  Another answer is that humans rarely exercise their
control over individual joints, not to mention individual muscles.  This is
usually handled at some low level over which one has little voluntary
control.  A third answer is that it is not our purpose to model human
manipulatory behavior, but rather to automate certain aspects of that
behavior which are of interest to the problem at hand.  The limitation to
"manipulator tip motions" does, however, make it difficult to specify
obstacle avoidance (where you might like to say "keep your elbow in") and
general control of the space through which the whole arm travels.

Another inherent limitation imposed by restricting destinations to be frame
values is that we cannot say "the coldest spot in the room" or "the closest
approach to LOC".  But it is the usual case that destinations are not accurately
known at the time the program is written,
and it would be good to allow the system to learn corrections to
its initial understanding of the locations of objects, and hence, of
destinations.  WAVE allowed this by means of a save/restore cell which
maintained a discrepancy between the planned and the observed location of
places.  AL has a much more sophisticated and powerful facility which
allows all trajectories to be modified immediately before execution to take
into account the planned/observed discrepancy.  However, this does not
assist in specifying "destinationless" trajectories, that is, those which
are solutions to some set of dynamic constraints.  AL also permits explicit
requests for intermediate points along the motion.  These can be used for
collision avoidance (in an admittedly ad-hoc fashion) and to specify a
multitude of destinations at which the arm is not to stop.  Thus circular
motions can be approximated by sufficiently frequent intermediate points.
\.
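The approximation of a circle by intermediate points can be made concrete.  The following sketch is ours, not part of AL; the function names are invented for illustration.

```python
import math

def circle_via_points(cx, cy, r, n):
    """Approximate a circular path by n evenly spaced via points; the
    arm is asked to pass through each without stopping."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n))
            for k in range(n)]

def max_chord_error(r, n):
    """Worst-case gap between the circle and the straight chord joining
    adjacent via points: r * (1 - cos(pi / n)).  It shrinks as n grows,
    so the approximation is arbitrarily (if expensively) accurate."""
    return r * (1 - math.cos(math.pi / n))
```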

\,
\!head3(Trajectory specification);

\F0\JThe arm control code must have some understanding of the desired
motion before the arm can be moved.  It is not immediately clear how
detailed this understanding needs to be; perhaps it is sufficient to merely
indicate the destination.  However, some intermediate points (we call them
"via points") might seem to be required.  Much of this will depend on whether the
desired action is to reach a destination or whether it is to move the arm
along some type of curve.  Even if only acquisition of the destination is
required, perhaps the control code requires some via points for the sake of
making a smooth motion.

Experience with the Scheinman arms has shown that controlled motion
requires that each working joint be serviced at least 60 times each
second.  Our hardware requires that the control code actually perform this
servoing.  One alternative is that the control code can perform linear
interpolation on the given via points to achieve the necessary servoing
rate.  The more given points, the smoother the motion can be expected to
become, but the more space is required to store the trajectory.  Via points
have to be inserted often enough to ensure that the arms are not required to
exceed their position, velocity, and acceleration limitations.

The answers found for WAVE are in general applied, with minor modification,
to AL: Trajectories are individually specified
for each joint, and each trajectory is composed of one or more
polynomials with respect to time.  When the arm is moving, the control code
compares the current location of each joint to the value of the
polynomial at the present time and issues corrective action to the motors.
This solution has several very nice properties.  One is that it is possible
\F1a priori\F0 to ensure that the maximum displacements, velocities, and
accelerations demanded of each joint comply with the capabilities of the
arm.  Another property is that the motion is smooth and can be guaranteed
to start and stop with zero acceleration.  A third property is that the
final trajectory specification is completely independent of servoing rate.
No interpolation need be done.  The algorithms used to construct the
trajectory, and some of the reasoning behind them, are discussed in the
section on trajectory calculation in the general discussion of the
compiler.
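The flavor of the per-joint polynomial representation can be conveyed by a small sketch.  This Python fragment is illustrative only; AL's actual trajectory calculator, described elsewhere, handles via points and joint limitations.  The quintic below shows how zero velocity and zero acceleration at both endpoints can be guaranteed.

```python
def joint_polynomial(q0, q1, T):
    """One joint's trajectory as a quintic polynomial in time, moving
    from q0 to q1 in time T with zero velocity and zero acceleration at
    both endpoints:  q(t) = q0 + d*(10 s^3 - 15 s^4 + 6 s^5), s = t/T."""
    d = q1 - q0
    def q(t):
        s = t / T
        return q0 + d * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return q

def correction(q, t, measured, gain=1.0):
    """The servo loop compares the measured joint position to the
    polynomial value at the present time and issues a proportional
    corrective drive (a caricature of the torque servo)."""
    return gain * (q(t) - measured)
```

Note that the polynomial is evaluated at whatever times the servo runs, which is why the representation is independent of servoing rate.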

This decision has several disadvantages.  Sometimes the resulting
complexity of trajectory is unnecessary.  This is the case for minor
motions that travel less than an inch.  Sometimes, on the other hand,
polynomials are insufficient.  This is the case for circular motions.  Such
a motion can only be approximated, albeit to arbitrary (if expensive)
accuracy, by polynomials.  Reliance on polynomials also imposes a
distinction between planning time and runtime, since the complexity of
calculating these polynomials leads to the decision to do it but once for
any motion, no matter how often that motion is to be repeated.
\.

\JThis is a fundamental design choice that has great repercussions on the
entire system.  For one, there need be no delay between motions, since each
trajectory has already been prepared.  The runtime system does not have to
be encumbered with the rather complex code necessary for trajectory
calculation.  On the other hand, such a decision requires that during
planning of trajectories, the destination values be available.  This means
that all variables that are to be used in destination
calculations must have planning values.  The structure needed to prepare
these planning values is not trivial, and even so is incapable of always
knowing what the value will be.  An example of this is if the destination
variable is conditionally set to one value or another, and then the arm is
asked to move to that value.  There is no way to know which value will
obtain when the code is actually executed.  Efforts have to be made to
allow the runtime system to make last-minute corrections to the
trajectories to take into account discrepancies between the planned and
actual values of destinations.  In a way, this last-minute
correction facility makes some of the careful
labor of the trajectory calculator useless.  A more serious philosophical
flaw with the separation of planning and run times is that it restricts the
adaptability of the system to respond to observed data.  Planned trajectories can be
so bad that they cannot be salvaged, although new ones could be prepared.
It also requires enormous space to store pre-planned trajectories, and if
new ones were always calculated on the spot, this space limitation would be
vastly reduced.

Occasions arise in which it is not so much desired that the arm
achieve a destination as that it follow a particular path.  This may
be for the sake of avoiding obstacles, but occurs also in such
tasks as turning cranks or penning letters.  The use of via points
is of great assistance in specifying such motions, but it seems
an artificial construct.  AL has a "tracing" type motion (which
is currently unimplemented) that allows the path itself to
be described as a locus in real space moving with time.  The trajectory
planner translates this into a set of via points, set close enough
together to guarantee that the resulting polynomial stays within
given tolerances of the desired trajectory.  Since we have not
implemented this feature, it is not at all clear how successful
it will be.  There is some danger that frequent via points
can lead to instabilities in the spline solution.

A list of set points might be preferable to polynomials, in that absolutely
arbitrary motions would be possible,
including the tracing motion just mentioned.
 This list could consist of just a
few set points and the requirement that the arm travel through 
(or perhaps near) these, and
elsewhere "do the right thing".  Polynomials embody a particular preplanned
idea of what that right thing may be, and they are indeed formed between
set points, namely, the initial, intermediate, and final positions for each
joint.  Other meanings for the right thing could be straight-line motion in
real space, or evasive action to avoid known obstacles.  It could mean
"move as fast as you can, taking care to be able to slow down in time to
turn corners and stop." Unfortunately, such sophisticated requirements
impose too much labor on the runtime system, which would need to
simultaneously service six joints and determine the exact trajectory based
on all sorts of data.  We are limited in this by the capacity of the
computers we use.  Some of these "right things" often turn out not
to be right at all.  Linear spatial motion can impose
unnecessary motion on some joints, especially rotary (as opposed to
prismatic) joints; it is therefore not always the most efficient motion.
Obstacle avoidance has long been a difficult problem, and even the
seemingly easier problem of collision detection seems to require more
computing power than we can currently apply at runtime.  Neither reliance on
polynomials nor straight-line motion of the manipulator tip attacks that
problem.  In this light, the reliance on polynomial trajectories is an
adequate, but not a complete, solution.  Increasing speed and
sophistication of the hardware (for example, a velocity servo, instead of
our torque-drive servo, would require less overall software computation)
will no doubt lead to a reevaluation of the decision to use polynomials.
\.

\,
\,This page is here temporarily.  Will go to end.

departure-approach.  simple collision av.


why do we not use insolvable arms
affixment unclear
\!head3(Indirect arm specification);

\F0\JIt is desired not merely to move the arm to a given destination, but
rather to move objects to destinations.  The most accessible object
is, of course, the arm itself.  But it often happens that some object is held in
the grasp of the arm, and it is this object that we wish to move to a
destination.  This raises several problems: What do we use to represent the
location of an object?  How do we model the affixment of an object to the arm?
How do we decide where the arm itself is to go?  What if we are wrong about the
relation of the object to the arm?

WAVE completely ignored the question of moving
objects instead of arms.  If you want to move an object, you
figure out what that means for the arm.  (Either you perform some computation or
actually put the object in the arm's grasp and move the assemblage to the
desired location, then ask for a readout of the current position.) It is
possible to satisfactorily program arm motions without any affixment feature,
but it is not easy.  It was decided that AL should make this often needed
feature a reality.
\.

\JThe first problem is how to represent an object.  The position of the arm is
represented by a frame variable.  It is reasonable to extend this concept to
represent any object.  The limitation with this generalization
is that neither the arm nor any
other ordinary object can be completely described by one position and one
orientation vector.  What we must mean when we say the arm is at some frame is
that a particular point on the arm (the one between the fingers) is at that
position, and the last link of the arm is oriented according to that
orientation.  Likewise, we shall assume that the user has associated some local
coordinate system with each object.  It is the position and orientation of that
coordinate system that is meant by the frame variable representing the object.
Often enough it is not the location of that coordinate system that we would
like to move elsewhere, but rather some point describable in that coordinate
system.  An example is to set a block on the table.  If the coordinate system is
attached to the upper corner of the block, we really may want the bottom of the
block, but not the origin of its local coordinate system, to end up on the
table.  This leads to the simple generalization of the affixment: The block is
affixed to the arm, and the special point in the bottom (describable as a
location and orientation in the block coordinate system) is affixed to the
block.  We really wish to move the latter to the table.
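The composition of affixments into an arm destination can be illustrated in a translation-only sketch; orientation is omitted, and the function and offsets below are invented for illustration, not AL's representation.

```python
def arm_destination(goal, block_in_arm, point_in_block):
    """Where must the arm itself go so that a special point on a grasped
    block lands at the goal?  Everything here is a pure (x, y, z)
    translation, so composition is vector addition; real frames also
    compose orientations."""
    offset = tuple(a + b for a, b in zip(block_in_arm, point_in_block))
    return tuple(g - o for g, o in zip(goal, offset))

table_top = (50.0, 30.0, 0.0)        # goal for the block's bottom corner
block_in_arm = (0.0, 0.0, -2.0)      # block origin 2 cm below the fingers
bottom_in_block = (0.0, 0.0, -5.0)   # bottom corner 5 cm below block origin
dest = arm_destination(table_top, block_in_arm, bottom_in_block)
```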

The next problem is how to represent the affixment of one object (say a block)
to a second (say an arm).  It is clear that we need some transform to associate
the two.  The block can always be found by application of the transform to the
current frame of the arm.  Should the user have access to this transform, or
should it be kept invisible to him?  What happens if he states that the block
has moved?  Ought this to change the transform, or the value of the arm's frame?
One might be hesitant to accept the latter (how could the arm have moved without
our knowledge?), but if affixment is to be generally allowed between any two
objects, we accept that perhaps the second one has been moved if the first is
observed to have moved.

The solution that we have adopted is to allow two flavors of affixment, termed
"rigid" and "nonrigid".  The former is symmetric.  If either "sister" object has
moved, it is assumed that the other has moved as well.  The latter is
non-symmetric.  There is a "mother" and a "daughter" object.  If the mother
moves, so does the daughter, but if the daughter moves, it is not assumed that
the mother does also.  The question of the availability of the transform is
somewhat puzzling.  If we allow the user to modify the transform linking two
sister objects, what shall we say about the new locations of the objects?  Which
one has moved?  In the nonrigid case, we are willing to allow the daughter to
move, so the question is not as serious.
\.

\JThe question of implementation also arises.  What sort of data structure is
needed to represent the various flavors of affixment?  This was not an easy
problem, because the save/restore cells of WAVE were not easily generalizable,
especially to the rigid case.  The outcome of our work was what is termed "graph
structure", because it connects the "nodes" representing objects by links that
represent the affixments.  The actual structure and algorithms are described
elsewhere.  The structure allows objects to be affixed to and detached
("defixed") from
each other, either in a rigid or a nonrigid sense.
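A minimal sketch of such a graph structure, again in illustrative Python rather than AL's actual data structure, might look like this.  Positions are pure translations, and the propagation rules follow the rigid/nonrigid distinction described above.

```python
class World:
    """An affixment graph over translation-only positions.  Links run
    from mother to daughter; rigid links propagate motion both ways,
    nonrigid links only from mother to daughter."""
    def __init__(self):
        self.pos = {}        # object name -> (x, y, z)
        self.links = []      # (mother, daughter, rigid)

    def affix(self, mother, daughter, rigid=False):
        self.links.append((mother, daughter, rigid))

    def defix(self, a, b):
        self.links = [l for l in self.links if {l[0], l[1]} != {a, b}]

    def move(self, name, delta, _seen=None):
        seen = _seen if _seen is not None else set()
        if name in seen:
            return
        seen.add(name)
        x, y, z = self.pos[name]
        self.pos[name] = (x + delta[0], y + delta[1], z + delta[2])
        for mother, daughter, rigid in self.links:
            if mother == name:                # the daughter always follows
                self.move(daughter, delta, seen)
            elif daughter == name and rigid:  # rigid: the sister follows too
                self.move(mother, delta, seen)

world = World()
world.pos = {"arm": (0.0, 0.0, 0.0), "block": (0.0, 0.0, -2.0)}
world.affix("arm", "block")             # nonrigid: the arm is the mother
world.move("arm", (1.0, 0.0, 0.0))      # the block follows the arm
```

The `_seen` set handles chains of affixments, preventing a motion from propagating back to an object it has already moved.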

Graph structure also has some very useful properties which were not
originally part of its motivation.  Value assignment to a variable can have arbitrary
side effects, including modifying the value of any other variable.  
The destination of a motion can be 
affixed to an object and the motion will go to the right
spot, even if that object has moved.  There
is even a foothold available for climbing the peak represented by continually moving
objects, those affixed somehow to a time variable.

Graph structure is not without its headaches.  It is a complicated mechanism
which has caused many fascinating implementation problems.  For one thing, it
does not follow the block structure which governs all variables.  It causes
certain problems during the planning phase since all destinations must be
available for the proper calculation of trajectories.  It must be accessible
during runtime in order to modify a trajectory that has as a destination some
variable whose value has changed due to motion of its sister or mother.  Chains
of affixments require special care, both in evaluation of values and in removal
of affixment links.

With all the power of the graph structure, one might hope that it would suffice
for modelling arbitrary conglomerations of objects, such as those that might
arise during assembly of a complex piece of machinery.  Unfortunately, this is
not the case.  Some real-world affixments are neither rigid nor nonrigid in the
way we have specified.  For example, two rods attached to each other at a pivot
joint cannot be said to have either relationship.  It is possible to move one
without moving the other as long as the joint is loose enough and the motion
leaves the pivot location unchanged.  In those cases where the pivot position is
changed, it is in general not possible to calculate the new position of the
other object, unless the new pivot angle is somehow available.  This problem
generalizes to the case of limp objects, like rope or cloth,
with which even humans have difficulties.
AL is nowhere near such a general model of objects.
\.


\,
\!head2(GENTLE MOTIONS);

\J\F0A second major purpose of arm control might be termed "gentle" motion.
One form may be termed "contact motion", in which the arm should stop after
contact of some sort is made.  For example, it may be intended to position
the arm in proximity to some resisting obstacle without attempting to go
through it.  This case arises most often in an attempt to determine
locations of objects by feeling them.  If the location of an object is
known roughly, some sort of search may result in the arm being positioned
in a known configuration with respect to the object.  If the location of
the arm is known, then so is the location of the object.  Another form of
gentle motion is "slow motion", which is to be preferred in proximity to
delicate objects (including humans).

Some of the worry about hitting objects is that the arms and objects we use
are not highly calibrated.  We therefore are unsure exactly how far to move
the arm to achieve a task.  This inherent lack of calibration will hold for any
mechanical manipulator system.
A greater worry is that the external world is subject to variations that
are outside the control (and often the knowledge) of the manipulator program.
Objects are not placed where they are expected to be, or perhaps it is not
clear at the outset exactly where to expect some object.

We have decided to achieve gentle motion not
by demanding an unreasonable level of calibration
or perfect knowledge of the external world, but rather by
introducing careful and adaptive behavior into the execution of trajectories.
\.

\,
\!head3(Velocity control);

\F0\JGentle motion requires several capabilities.  The first is that the
speed of motion must be under the control of the user.  How is he to
specify the speed?  By velocity bounds, or as an overall constant velocity?
What one would ideally like would be to easily bound the speed of the
manipulator tip through space either above or below or both.  Then
constant-speed motion would be easy to stipulate.  Even harmonic motion
would be possible, if the constraints were allowed to vary with time.  Such
generality, however, poses several threats.  It is not good to allow the
user to cause the arm to become unstable (just as a human is ordinarily
unable to induce a fit of violent trembling in his own arm).  The previous
decision to represent motions by means of low-order polynomials in joint
space means that it is very difficult to translate velocity upper/lower
bounds from real space into constraints on the polynomials.  The
implementation of the trajectory calculator would become far more complex,
and the algorithm employed would very likely be iterative.  And in the
final analysis, such a general capability seems hardly necessary in the
specific uses to which it is to be put.  "Move slowly", in fact, seems
often to be a perfectly adequate specification.

The technique used in WAVE is to allow the total time of each motion to be
specified.  The duration can be defaulted to some "optimal" time that the
system will determine on the basis of known joint capabilities and the
distance that each must travel.  AL uses exactly the same method, with
slight generalization.  It is possible to specify intermediate points in
the motion.  For every segment of the total motion, AL allows the user to
say how long that segment should take, either as a lower, an upper, or an
exact bound.  The total motion may also be so bounded.  AL complains if
it believes that the arm is not capable of traveling at the velocities thus
specified, but it prepares the trajectory anyway.  This does not violate
the philosophy that the user should not be able to render the arm unstable,
because it will not introduce oscillations into the trajectory.  If the arm
control code is unable to force the arm to follow the trajectory because it
cannot move fast enough or with enough acceleration, the arm will be
brought to rest.  AL also complains, of course, if the various time
constraints for individual segments are inconsistent with the time
constraint for the entire motion.  The other generalization of WAVE's
velocity control is that the AL user can stipulate an exact, real-world
velocity at the intermediate points.  In WAVE, only the start/finish points
are velocity constrained (to zero velocity).  This velocity applies only to
position, not to orientation.
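The consistency check between the segment bounds and the bound on the whole motion amounts to simple interval arithmetic, as the following sketch (our illustration, not AL's implementation) suggests.

```python
def feasible_times(segments, total=None):
    """Each segment carries a (low, high) bound on its duration; an
    exact bound is (t, t) and an unconstrained segment (0, float("inf")).
    The bounds are consistent when the interval of achievable total
    times overlaps the bound on the entire motion."""
    lo = sum(s[0] for s in segments)
    hi = sum(s[1] for s in segments)
    if total is None:
        return True
    return lo <= total[1] and hi >= total[0]
```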

The decision outlined above has certain defects.  For one, it does not
allow the user to insert a variable in the time constraint.  That means
that a motion, once planned, cannot be executed at various speeds.
Furthermore, the system-calculated optimal time is not available to the
user.  He cannot say "take twice as long as you think you need".  This
limitation is not inherent in the decision; it is rather a detail of the
particular implementation.  If a need for such specifications is
discovered, it will be easy to implement it.  Another defect is just that
it is not a general solution.  It simply does not allow the user to say
that he wants a motion to have a peak velocity of 10 centimeters per
second, for example.  Even though absolute velocities may be specified at
given points, accelerations may not be so specified (although this turns
out also to be an implementation detail).  The coupling between velocity
and time specifications is not well thought out in the trajectory planner.
It is quite possible for the user to accidentally specify wild trajectories
(that is, trajectories with bad "overshoot" properties) by combining these
constraints.  Further experience with AL will help to determine whether the
current, limited methods need to be made more sophisticated.
\.


\,
\!head3(Force Feedback);

\F0\JGentle motion requires more than velocity control.  Say that the user
wishes to move the arm down until it rests on a block of unknown height.
Moving slowly does not suffice, since the arm may then try to slowly force
itself into the block.  This situation arises since there is no \F1a
priori\F0 information on how far the arm should move.  One could 
wait until the arm control code notices excessive force on the arm and
causes it to stop, but this action should be treated as a failsafe
mechanism, not as a technique for gentle motion.

What is required is feedback which the user can employ to decide whether the
motion should be terminated.  This feedback should be generally available
during the course of a motion.  This raises many questions.  What sort of
feedback is needed?  What sort can actually be implemented?  How can the
program be made aware of the current feedback values?  How can it take action
simultaneously with arm motion?  What range
of actions should be allowed?

The WAVE system allowed motions to have an associated "stop on force"
command.  One specifies the direction and magnitude of the expected force,
and the arm control code takes this request into account during the motion.
It is translated into a combined constraint on the forces for each joint,
and each time through the servo loop (WAVE servoed all the joints at one
go), the current force in the specified direction is compared to the limit.
When the limit is reached, the arm is halted without any error message.  Of
course, if some joint exceeds its particular physical force limit before
the motion is completed, the arm is halted with an error message.\.
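WAVE's "stop on force" behavior can be caricatured in one dimension as follows; the force model and the names here are invented for illustration.

```python
def move_with_force_stop(start, dest, force_at, limit, steps=100):
    """Step a one-dimensional "arm" from start toward dest, comparing
    the sensed force to the stop limit each time through the servo
    loop.  force_at models the sensor reading at a given height."""
    pos = start
    step = (dest - start) / steps
    for _ in range(steps):
        if force_at(pos) >= limit:
            return pos, "stopped on force"
        pos += step
    return pos, "reached destination"

# A block of unknown height with its top at z = 3: at or below the top
# the arm feels a sharply rising reaction force.
force = lambda z: 0.0 if z > 3.0 else 10.0
where, outcome = move_with_force_stop(10.0, 0.0, force, limit=5.0)
```

The arm comes to rest near the top of the block, and no error is signalled; the halt is the intended outcome of the motion.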

\JIs this technique sufficient for gentle motions?  Experience has shown that
WAVE was capable of many delicate motions, including feeling whether the hand is
holding a screw or not by tapping it against the table.  However, WAVE
could not examine more than one force vector during the course of a motion,
nor could it stop on the absence of force.  For example, to put a pin through
a layer of cardboard requires that the motion be stopped as soon as the
force dies away.  One can imagine other constraints on the
force more complex than exceeding some value in some direction.

Another limitation of the WAVE system is that it dealt in a completely
different manner with touch sensors.  It was possible to specify that
activation of the touch sensors should stop the arm, but the method was
completely different from force checking.  Addition of new types of
sensors (for example, temperature or proximity sensors) would require more
special cases in WAVE.  It makes sense to have one unified programming
technique to deal with all sensory input instead of introducing modifications
in the structure of the system for every new device.

It is not always proper to respond to observed force by stopping the arm.
Perhaps the fact that a certain force has been achieved is to be
used to decide whether some later motion should be attempted.  In a system
for general planning of arm tasks, it is reasonable to ask that the user
program be made aware of whatever feedback is available and let it decide
what the appropriate action is.  WAVE was not completely restricted to stopping
the arm; the user could make conditional branches after a completed motion
based on whether or not expected forces had been felt.  However, action
could not be initiated during the motion.

What is desired is to be able to take arbitrary feedback during arm motions
and to cause arbitrary results based on that feedback.  The technique
employed by AL is called the "condition monitor".  It acts independently of
the main program, and periodically checks whatever condition it has been
set up to examine.  For example, it can find out the current force 
on the arm along some vector.  It can also check the touch sensors.  It
can examine variables in the program, or look at any other state
information.  On the basis of these observations, the monitor decides
whether or not to trigger its conclusion, which could be to stop the arm,
to assign some value to a variable, or even to create some new condition
monitor to investigate something else.
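The shape of such a monitor can be suggested by a small sketch.  This is a hypothetical Python rendering of the idea, not AL syntax: the condition is an arbitrary predicate over current state, and the conclusion is an arbitrary side effect, fired at most once.

```python
# Sketch of a condition monitor: polled once per cycle, it evaluates its
# condition against the current state and, on success, fires its
# conclusion (which may itself set variables or create new monitors).

class ConditionMonitor:
    def __init__(self, condition, conclusion):
        self.condition = condition      # state -> bool
        self.conclusion = conclusion    # state -> None (side effect)
        self.triggered = False

    def check(self, state):
        if not self.triggered and self.condition(state):
            self.triggered = True
            self.conclusion(state)

def run_cycles(monitors, states):
    """Poll every monitor once per servo cycle."""
    for state in states:
        for m in list(monitors):        # copy: conclusions may add monitors
            m.check(state)
```

For example, a monitor whose condition is "force along z exceeds 10 grams" and whose conclusion stops the arm reproduces the WAVE behavior as one instance of the general mechanism.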

This technique, which arises out of a need to perform gentle motions, turns
out to have great power and applicability to other situations, such as
coordinating motions of several arms and modifying motions in progress.
This is because it does not merely attach some special meaning to the
motion request, but rather starts an independent process which can have
arbitrary effect.  It is a means of programming side effects to arbitrary
situations.

The forms of available feedback from the real world (as
opposed to the internal world of variables and their values)
in our laboratory set-up are joint
reaction torques for each joint and the binary state of the touch sensor
on each finger.  The user has no real use for the joint reaction torques, for the
same reason that he has no need to specify the motion individually
for each joint.  Exactly the same arguments, both pro and con, apply
to this question.  Therefore, it has been our decision that the joint
reaction torques are always translated to the component of force along
and about some given vector before being made available to the user program.
\.

\,
\!head3(Force Application);

\F0\JConsider the task of wiping a pane of glass with a cloth.  In order not
to slip away from the glass, or, even worse, to go through it, it is either
necessary to have precise control over the hand position (both at
set-points and between them) or to apply a small force in the direction of
the glass.  The state of the art in manipulator design
and the limited knowledge of the external world do
not permit the
first alternative, nor does our choice of polynomials for trajectories
allow the required precision of positioning in the middle of the motions.
The second alternative seems preferable.
Once it has been decided to allow a facility to apply forces, many other
applications arise.  WAVE has been quite useful in demonstrating that
insertion of screws and other assembly tasks are made easier if one can
require that forces be applied during the motion. 

The default is to say nothing about forces.  However, this is also
a specification, namely, it
requires that whatever force impinges upon the arm
should be met by equal force in order to maintain fidelity to the
trajectory.  If one states that, say, a constant force of 10 grams should
be applied in the downward direction during a motion, that means that no
particular effort should be made to make the motion comply \F1in that
direction\F0 with the trajectory given.  If the arm finds no retarding
force in the environment, it will move downward until it does hit
something, and will then try to exert a constant 10 grams.  If the object
is moving, the arm will track it.  Another special case of application of
forces is to require that no force at all be applied in a given direction.
This causes the arm to be free of all trajectory control in that direction and
capable of being driven by external forces.  This technique is useful to
seat pins in holes.  One specifies freedom (that is, zero force) in the two
horizontal directions, and slight downward force in the vertical direction.
\.
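The three behaviors for a single direction (position servoing by default, constant applied force, and freedom as the zero-force case) can be sketched as follows.  This is an illustrative Python sketch of the semantics, not AL syntax; the names and units are hypothetical.

```python
# Sketch of per-direction behavior under a force specification: an axis
# with no specification is position-servoed against the trajectory; an
# axis with a specified force (possibly 0.0, i.e. "freedom") gives up
# trajectory control in that direction and simply applies the force.

def axis_command(desired_pos, actual_pos, applied_force, gain=1.0):
    """Control effort for one axis.  applied_force is None for normal
    position servoing, or a number (0.0 meaning freedom) when a force
    has been specified for this direction."""
    if applied_force is None:
        return gain * (desired_pos - actual_pos)   # track the trajectory
    return applied_force                            # ignore position error

# Seating a pin: freedom in both horizontal directions, slight downward
# force in the vertical direction (grams; hypothetical units).
pin_spec = {"x": 0.0, "y": 0.0, "z": -10.0}
```

Note that "no specification" and "zero force" are distinct: the former resists external forces to stay on the trajectory, while the latter lets external forces drive the axis.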

\JWAVE provided this capability by means of two separate constructs, one for
freedom and one for forcing.
The discussion above makes it clear that freedom and forcing are really
both cases of a general idea, and so AL combines them.  AL allows part
of a motion specification to be a force application.  There can be up
to six of these in any motion, three for orthogonal spatial directions
and three for torques about three orthogonal directions.
One question arises:  Should these directions be stated in terms of
hand coordinates or station coordinates?  The advantage of the first
is that long motions can change the orientation of the hand, and it
may be desired to have the direction of force change concomitantly.
The advantage of the second is just the opposite:  during such motions,
it may in fact be desired not to have the direction of force change
with the hand orientation.  The solution in WAVE was that force
was specified in hand coordinates, and freedom in station coordinates.
AL specifies both in hand coordinates.  This is merely an implementation
restriction; it is quite conceivable that the specification will be
made more general.  Very likely the proper answer is to allow specification
in the coordinate system of any frame, and if that frame is being moved
by the active arm, the directions specified should move as well.

The implementation of force control is an interesting problem.  The
basic method is to modify the feedback loop for individual joints
to override corrective action taken for position errors  and to replace
that action with an additional term corresponding to the incremental
force to be added in the given joint.  As the arm moves, the share
of the effort allotted to each joint changes, since that figure
is heavily dependent on arm configuration.  Thus there must be some
periodic updating of the chosen joints and the size of their force
terms.  
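The configuration-dependent division of effort corresponds to the standard Jacobian-transpose relation between a Cartesian force and the joint torques that realize it.  The following is our reading of the method in a Python sketch, not code from the AL servo.

```python
# Sketch of distributing a requested Cartesian force over the joints:
# the share for each joint is given by the transpose of the manipulator
# Jacobian (tau = J^T * F), and must be recomputed periodically because
# the Jacobian depends on the current arm configuration.

def joint_force_terms(jacobian, force):
    """jacobian: list of rows (one per Cartesian direction), each a
    list of per-joint entries.  force: the requested Cartesian force.
    Returns the incremental torque term for each joint."""
    n_joints = len(jacobian[0])
    return [sum(jacobian[r][j] * force[r] for r in range(len(force)))
            for j in range(n_joints)]
```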

Certain stability problems can arise when forces are applied.  Perhaps
the arm is holding one end of a spring.  The application of a force
may lead to an oscillation.  Another typically bad example is holding
a partially submerged object.  These cases will most likely also cause
problems even in the absence of force application.  Another source of
instability would arise if the force application specification were
allowed to change with time.  One could then easily ask for oscillatory
motions.  Therefore we have not permitted the force to change during
the course of the motion.  We are not sure how useful
such specification might be.
\.

\,
\!head3(Adaptive Control);

\JSome tasks require that motion be modified to respond to external
forces.  The standard example is to put a phonograph platter on a
spindle using only one hand, when the spindle's location is not known
exactly.  As the platter tilts to one side, the torque exerted on the
hand can be used to modify the understanding of where the spindle is.
The greater the torque, the farther the platter is from being
centered.  Such a task can certainly be solved by making small
corrections to the platter position, stopping the arm, and computing
the next correction, but it may be preferable to have one motion that
exhibits adaptive behavior.

The requirements for such behavior are that the feedback be available
to a parallel process while the arm is in motion, and that that
process have the capability of modifying the trajectory.

AL already has the condition monitor and a means to gather feedback.
What remains is a capability for modifying an active trajectory.  The
current AL does not have this, and there are some reasons why it may
be dangerous.  If only the arm control code ever moves an arm, there
is some safety from ridiculous requests that can lead the arm astray.
If a user program is allowed to modify the servo loop (and this is the
effect of modifying an active trajectory), this security is
diminished.  One suggestion is to provide a cell, called the "nudge",
that the user may modify for any joint.  The destination of a motion
is the stated destination plus the nudge.  If the nudge is not zero,
some fraction of it (depending on the progress of the motion with
respect to time) is added to the set point.  Some damping will prevent
sudden jumps as the nudge is modified.  Hopefully, the damping will
also protect the arm from oscillation.
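The proposed scheme can be sketched as follows.  This is an illustrative Python sketch of the suggestion, with a simple first-order filter standing in for the damping; none of it is implemented AL code.

```python
# Sketch of the proposed "nudge" cell: a fraction of the nudge (scaled
# by the progress of the motion) is added to each set point, and a
# first-order filter damps sudden jumps as the user modifies the nudge.

def nudged_setpoints(planned, nudges, alpha=0.5):
    """planned: (progress, set_point) pairs, progress in [0, 1];
    nudges: the nudge cell's value as read at each cycle.
    Returns the set points actually handed to the servo."""
    damped = 0.0
    out = []
    for (progress, set_point), nudge in zip(planned, nudges):
        damped += alpha * (nudge - damped)   # damping against sudden jumps
        out.append(set_point + progress * damped)
    return out
```

With this scheme the stated destination is reached plus the (damped) nudge, since progress is 1 at the end of the motion.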

We hope to investigate this form of adaptive control using the AL
system.  It will be of interest to know the utility and reliability of
such control.  This research will lead to a better understanding of a
host of control issues.
\.

\,
\!head2(CONTROL OF SEVERAL DEVICES);

\F0\JIn situations where more than one manipulator is available,
there should be a way to make use of the capabilities of each device
in both sequential and simultaneous fashion.  Parallel control of devices can
be for independent tasks or for related tasks.  Various degrees
of coordination require various capabilities.
\.

\,
\!head3(Sequential control);

\F0\JIf there are several manipulators, one would like to be able to
program them all.  The same program should be able to first move one, and
then another of the available devices.

When WAVE was first developed, the only devices were the yellow arm and its
hand, which were considered separately.  An electric screwdriver and a
pneumatic vise were subsequently added.  WAVE had separate syntax for each
of these devices, and they were all accessible to the same program.  When a
second arm was installed, two different kinds of WAVE were distinguished,
YELLOW and BLUE, each knowing only how to control its own arm and fingers,
but having access to all the other devices.  This meant that the same
program could not run both arms.  In order to perform multiple-arm tasks,
both versions of WAVE had to be run at once as separate processes in the
timesharing computer.  There was a simple communication provided that would
cause one process to pause and wait for the other to pause.  At this time,
the first process would resume.  This allowed programming of serial two-arm
tasks, including the assembly of a hinge.

Such a solution requires more communication complexity to control more than
two arms.  It suffers from the fact that a program for a multi-arm task
must be separated into several programs and run with different copies of
the WAVE system.

AL is designed to handle arbitrary intermixtures of devices.  Each arm has
a name, and motion statements identify the intended device by this name.
An attempt has been made to unify control of such devices as the fingers,
the screwdriver, and the vise, as well as whatever devices may be invented,
all into one general syntax.  This syntax is close to the syntax for the
arms themselves, and it may well be that some future system will complete
this standardization to allow any controllable device to be programmed in a
uniform fashion.

Again the question arises regarding how much detailed information should be
available to the user concerning the physical structure of the devices he
uses.  It is certain that the planner stage of the system must know these
peculiarities (like how fast each joint can go, and where each joint should
be to position the terminus of the arm at a given spot).  In line with
other decisions taken in AL, it has been agreed that the user is given no
device-specific information.  This imposes a limitation, to be sure, but
makes for greater portability of programs and does not seem to gravely
reduce the flexibility of the system.
\.

\,
\!head3(Independent parallel control);

\F0\JWhen several devices are available, it becomes reasonable to require
more than one to act concurrently.  A standard example is to close the
hand while the arm is in motion.  Less obvious examples are to open the
vise while the arm picks up an object, and even to have two arms working at
the same time.

WAVE allowed only a rudimentary form of parallel control.  There was a
"merge" construct that could cause several device control statements to
begin execution simultaneously.  Since the two arms both depended on the
same hardware, it was physically impossible to activate them at the same
time.  For this reason, the use of separate copies of WAVE was adequate.
Since parallelism was only possible
at the single motion level, entire parallel programs were not feasible.

True independent parallel control requires that the
hardware allow each device to be active independently of whether some
other combination of devices is active and that there be a way to divide
program execution so it proceeds in two ways at once.
AL attempts to fulfill these requirements by having a "divide program"
primitive called "cobegin" (modeled after the ALGOL "begin" and suggested
by Brinch-Hansen, among others) that divides a section of subsequent
program into several threads meant to be simultaneously executed.  The main
program waits for each of the parallel threads to terminate before
rejoining them into a common rope.  Since any thread may include device
control statements, independent parallel control is achieved.  The hardware
interfaces connecting the manipulators to the computer have been designed
to allow such parallel action.
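The fork-and-rejoin behavior of cobegin can be suggested with a modern sketch.  This is an illustrative Python rendering using operating-system threads, not the AL runtime; the thunks stand in for the parallel program threads.

```python
# Sketch of cobegin/coend: each branch of the split runs concurrently,
# and the main program waits for every parallel thread to terminate
# before rejoining them into a common rope and continuing.

import threading

def cobegin(*thunks):
    threads = [threading.Thread(target=t) for t in thunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()        # main program waits for all branches
```

A program could thus run, say, an arm motion and a vise opening as two branches, with execution past the cobegin guaranteed not to begin until both have finished.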

Such a solution presents some problems of interlocking.  It is possible for
two threads to require motion from the same device simultaneously.  That
such a situation might arise can be predicted during trajectory planning,
but not with certainty.  Therefore the runtime system must protect itself
against such behavior by explicitly making arm control code non-reentrant,
that is, preventing the same joint from being servoed by two processes
simultaneously.  Another problem of the same flavor, but of much greater
complexity, is the danger that two moving devices will collide.  Of course,
even if only one device is in operation, it can collide with unexpected
obstacles, but it seems reasonable to hope that if obstacles are known
beforehand, trajectories can be planned to avoid them.  (This "reasonable
hope" has never been realized at runtime, nor economically at planning
time, by the way.) The problem is compounded by
the possibility that the objects at risk can be moving around due to the
ministrations of an independent piece of code.  It seems that such a situation
requires collision avoidance to be an integral part of the motion code, so
that the most recent understanding of obstacle locations might be used.
Some sort of proximity sensor might also be useful.  In any case, AL has so
far chosen to ignore the problem.
\.
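The non-reentrancy protection can be sketched with a lock per device, acquired for the duration of a motion.  This is a hypothetical Python sketch of the idea, not the actual runtime mechanism.

```python
# Sketch of making device control non-reentrant: each device carries a
# lock, and a second request to move an already-moving device is
# rejected rather than allowed to fight the first over the servo.

import threading

class Device:
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()

    def move(self, action):
        if not self._lock.acquire(blocking=False):
            raise RuntimeError(self.name + " is already being servoed")
        try:
            return action()       # run the motion while holding the lock
        finally:
            self._lock.release()
```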

\,
\!head3(Synchronized parallel control);

\F0\JTo have several program threads executing manipulator control code is
not sufficient for many tasks.  There needs to be some way to coordinate
the various threads so that one will wait, if necessary, for the completion
of a requisite action by another.  One could accomplish such programming by
terminating the cobegin split, that is, by performing a join, but this is
unappealing on the grounds that it gives rise to non-locality of
programming.  If one thread is to move the vise, and the other to control
the arm, it seems wrong to have to join them and resplit them in order to
achieve synchronization.  Another possibility is to have some variable act
as a flag.  One process queries it, the other one sets it when it has
accomplished its task.  
Such "busy waiting" can waste machine cycles, and can actively prevent
the awaited completion of the parallel process.

WAVE never had to attack this problem, since it did not have
parallel control.  However, the pause facility for communication between
the two WAVES, YELLOW and BLUE, is a primitive synchronization capability of
the sort we need.

AL is designed to use the "signal" and "wait" event structure first
described by Dijkstra, and since generalized by others, notably
Brinch-Hansen, Hoare, and Peterson.  AL does not use the full power of
events, but does allow the program to define event variables and to signal
them and wait on them.  Since there is an arbitrary number of such events,
different sorts of synchronization can be programmed, whereas WAVE only had
one kind, the pause.
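A signal/wait pair of the kind described can be sketched as follows.  This is an illustrative Python sketch built on a counting semaphore (so that a signal arriving before the wait is not lost); AL's actual primitives may differ in detail.

```python
# Sketch of an event variable with signal and wait: the program may
# declare any number of these, signal them from one thread, and wait
# on them from another.

import threading

class Event:
    def __init__(self):
        self._sem = threading.Semaphore(0)

    def signal(self):
        self._sem.release()       # wake (or pre-authorize) one waiter

    def wait(self):
        self._sem.acquire()       # block until a signal is available
```

For example, the vise thread signals an event after clamping, and the arm thread waits on that event before moving in, without any join of the two program branches.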

One limitation of the event mechanism is inherent in AL: there is no way to
use it to ensure that two motions actually begin simultaneously.  Threads
that are awakened after a wait get some priority, but there is enough
computation that precedes a motion to render exact timings unreliable.  The
fact that we are working with only one processor for the runtime implies
that this sort of timing consideration will persist.
\.

\,
\!head3(Coordinated parallel control);

\JMultiple-arm tasks can include those for which it is necessary
that several arms work at the same time in close coordination.
For example, it might take two arms to lift a heavy object or
to stabilize an object suspended from a rope.  Humans ordinarily
function with two arms simultaneously in close coordination.

WAVE could not run both arms simultaneously, so it never dealt
with this problem.  AL has been designed to use several devices,
so it was necessary to give this problem some thought.  The solution
we arrived at seems weak.  It is to allow motion specifications
with paired destinations and via points.  For example, the
two arms can be controlled under the same motion statement, but
they go to different destinations.  They therefore are forced
to take the same time for their respective motions.  If they
are to assist each other in lifting a rigid object, one of
them should be freed (that is, a force of zero should be applied)
in the direction between the arms. 

One of the reasons the tracing form of trajectory specification was put into AL
is to assist in multiple arm motions.  One arm moves to a destination
in the usual way, and the other traces a spatial curve based on the planned
location of the first throughout its motion.  It will not be clear
how successful this approach is until it has been tested.  
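The tracing idea, in its simplest form, derives the second arm's set points cycle by cycle from the planned positions of the first.  The following Python sketch is purely illustrative (a fixed offset standing in for the grip separation on a rigid object); the actual AL trajectory machinery is more involved.

```python
# Sketch of tracing: the second arm's set points follow the planned
# locations of the first arm, displaced by a fixed offset, so that the
# two motions take the same time and keep their relative geometry.

def trace(leader_plan, offset):
    """leader_plan: planned positions of the first arm over the motion.
    Returns the traced set points for the second arm."""
    return [tuple(p + o for p, o in zip(point, offset))
            for point in leader_plan]
```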

Adaptive control of one arm based on the sensory feedback gathered
by the other can also be quite useful.  One arm might hold a spring-loaded
door open for the other.  If the second feels that it is binding, the first
should apply more pressure.  This task requires that adaptive control
of forces, as well as positions, be available.  Future research will
certainly concentrate on this problem.
\.


\,
\!head2(EASE OF USE);

\JA system for manipulator control should be designed so that it
is easy to accomplish tasks.  This ease should obtain throughout
the solution process.  The stages of that process can be enumerated
as 1) Understanding the problem, 2) Formalizing the techniques for
solution, 3) Preparing a program to accomplish the solution with
those techniques, 4) Planning the trajectories needed in that
program, 5) Executing the plan to discover errors in data, technique,
application, and reliability, 6) Iterating previous steps until
the program works, 7) Using the program as many times as desired.

This breakdown of the solution process is based on the assumption
that some sort of programming language is to be used and that
there is a distinction between planning time and runtime.  These
assumptions hold for both WAVE and AL.  The fact that there
is a programming language assists in the formalization mentioned
in steps 2 and 3.  The purpose of the compiler is to accomplish
step 4, and the runtime system is designed to assist in step 5.

Discussion of features particularly designed to assist the user
in progressing through the stages outlined above is outside
the scope of this document.  It is, however, an important subject,
since the acceptability of any system for manipulator control
depends on such features.
\.